Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions
Authors
Abstract
We consider the problem of optimizing an approximately convex function over a bounded convex set in R^n using only function evaluations. The problem is reduced to sampling from an approximately log-concave distribution using the Hit-and-Run method, which is shown to have the same O∗ complexity as sampling from log-concave distributions. In addition to extending the analysis for log-concave distributions to approximately log-concave distributions, the implementation of the one-dimensional sampler of the Hit-and-Run walk requires new methods and analysis. The algorithm is based on simulated annealing, which does not rely on first-order conditions and is therefore essentially immune to local minima. We then apply the method to several motivating problems. In the context of zeroth-order stochastic convex optimization, the proposed method produces an ε-minimizer after O∗(n^{7.5} ε^{-2}) noisy function evaluations by inducing an O(ε/n)-approximately log-concave distribution. We also consider in detail the case in which the “amount of non-convexity” decays towards the optimum of the function. Other applications of the method discussed in this work include private computation of empirical risk minimizers, two-stage stochastic programming, and approximate dynamic programming for online learning.
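The following is a minimal Python sketch of the general approach the abstract describes: at each temperature T, run a Hit-and-Run random walk whose stationary density is proportional to exp(-f(x)/T), which is approximately log-concave when f is approximately convex, and anneal T downward. The box-shaped convex body, the noisy quadratic test objective, the grid-based one-dimensional chord sampler, and all schedule constants are illustrative assumptions of this sketch, not the construction or the parameters analyzed in the paper.

```python
# Illustrative sketch only: Hit-and-Run simulated annealing for zeroth-order
# optimization of an approximately convex function. The box-shaped body K,
# the noisy quadratic objective, the discretized 1-d chord sampler, and the
# cooling schedule are assumptions of this sketch, not the paper's choices.
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, noise=0.01):
    """Approximately convex objective: a convex quadratic plus bounded noise."""
    return float(x @ x) + noise * rng.uniform(-1.0, 1.0)

def hit_and_run_step(x, f, T, lo=-1.0, hi=1.0, n_grid=64):
    """One Hit-and-Run step targeting a density proportional to exp(-f/T) on the box [lo, hi]^n."""
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    # Endpoints of the chord {x + t*d} that stays inside the box.
    with np.errstate(divide="ignore"):
        upper = np.where(d > 0, (hi - x) / d, np.where(d < 0, (lo - x) / d, np.inf))
        lower = np.where(d > 0, (lo - x) / d, np.where(d < 0, (hi - x) / d, -np.inf))
    ts = np.linspace(lower.max(), upper.min(), n_grid)
    vals = np.array([f(x + t * d) for t in ts])
    w = np.exp(-(vals - vals.min()) / T)      # unnormalized 1-d density on the chord
    t = rng.choice(ts, p=w / w.sum())         # crude stand-in for the paper's 1-d sampler
    return x + t * d

def simulated_annealing(f, n, n_temps=30, walk_len=50, T0=1.0, cooling=0.8):
    """Cool the temperature geometrically, running a short Hit-and-Run walk at each level."""
    x = np.zeros(n)                           # interior starting point of the box
    T = T0
    for _ in range(n_temps):
        for _ in range(walk_len):
            x = hit_and_run_step(x, f, T)
        T *= cooling
    return x

if __name__ == "__main__":
    x_hat = simulated_annealing(noisy_f, n=5)
    print("approximate minimizer:", np.round(x_hat, 3))
```

As the temperature decreases, the walk concentrates near the minimizer; because only noisy function values enter the acceptance weights, no gradient information is needed.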
Similar Papers
Modeling the optical constants of solids using acceptance-probability-controlled simulated annealing with an adaptive move generation procedure
The acceptance-probability-controlled simulated annealing with an adaptive move generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move generation procedure was compared against the random move generation procedure on seven multi-minima test functions, as well as on synthetic data resembling the optical constants of a m...
Simulated Annealing for Convex Optimization
We apply the method known as simulated annealing to the following problem in convex optimization: minimize a linear function over an arbitrary convex set, where the convex set is specified only by a membership oracle. Using distributions from the Boltzmann-Gibbs family leads to an algorithm that needs only O∗(√n) phases for instances in R^n. This gives an optimization algorithm that makes O∗(n4...
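A small sketch of the cooling-schedule arithmetic behind the O∗(√n)-phase claim above: cooling with ratio (1 - 1/√n) reaches a fixed target temperature in roughly √n · log(T0/T_final) phases. The start and stop temperatures below are illustrative assumptions, not values from the cited paper.

```python
# Illustration of the cooling-schedule arithmetic behind an O*(sqrt(n))-phase
# bound: with ratio (1 - 1/sqrt(n)), reaching a fixed target temperature takes
# roughly sqrt(n) * log(T0 / T_final) phases. T0 and T_final are assumptions.
import math

def phase_count(n, T0=1.0, T_final=1e-3):
    ratio = 1.0 - 1.0 / math.sqrt(n)
    k, T = 0, T0
    while T > T_final:
        T *= ratio
        k += 1
    return k

for n in (10, 100, 1000):
    print(n, phase_count(n))   # grows roughly like sqrt(n) * log(1/1e-3) ≈ 6.9 * sqrt(n)
```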
Tabu-KM: A Hybrid Clustering Algorithm Based on Tabu Search Approach
The clustering problem under the criterion of minimum sum of squares is a non-convex and non-linear program which possesses many locally optimal values, with the result that its solution often falls into such traps and cannot converge to the globally optimal solution. In this paper, an efficient hybrid optimization algorithm, called Tabu-KM, is developed for solving this problem. It gathers the ...
Hybrid of Particle Swarm Optimization, Simulated Annealing and Tabu Search for the Reconstruction of Two-dimensional Targets from Laboratory-controlled Data
Recently, the use of the particle swarm optimization (PSO) technique for the reconstruction of microwave images has received increasing interest from the optimization community due to its simplicity of implementation and its inexpensive computational overhead. However, the basic PSO algorithm is easily trapped in local minima and may lead to premature convergence. When a local optimal s...
V Annealing by Stochastic Neural Networks for Optimization
Two major classes of optimization techniques are deterministic gradient methods and stochastic annealing methods. Gradient descent algorithms are greedy algorithms, which are subject to the fundamental limitation of being easily trapped in local minima of the cost function. Hopfield networks usually converge to a local minimum of the energy function. Because of its deterministic input-output rela...
Publication date: 2015